Generalization Error and Algorithmic Convergence of Median Boosting

Author

  • Balázs Kégl
Abstract

We have recently proposed an extension of ADABOOST to regression that uses the median of the base regressors as the final regressor. In this paper we extend theoretical results obtained for ADABOOST to median boosting and to its localized variant. First, we extend recent results on efficient margin maximization to show that the algorithm can converge to within a preset precision of the maximum achievable margin in a finite number of steps. Then we provide confidence-interval-type bounds on the generalization error.
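
The final regressor described here is a weighted median of the base regressors. As a minimal illustration of that prediction rule, here is a Python sketch; the helper names and the use of per-regressor coefficients as median weights are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest v among `values` such that the total weight of
    {values <= v} reaches half of the overall weight."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def median_boost_predict(base_regressors, coefficients, x):
    """Final prediction: weighted median of the base predictions,
    using the (hypothetical) per-regressor coefficients as weights."""
    return weighted_median([h(x) for h in base_regressors], coefficients)
```

With equal coefficients this reduces to the plain median of the base predictions.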


Related papers

Confidence-rated Regression by Localized Median Boosting

In this paper we describe and analyze LOCMEDBOOST, an algorithm that boosts regressors with input-dependent weights. The algorithm is a synthesis of median boosting [1] and localized boosting [2, 3, 4], and unifies the advantages of the two approaches. We prove boosting-type convergence of the algorithm and give clear conditions for the convergence of the robust training error, where robustness ...
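
The distinguishing feature of this abstract is that the weights are input dependent. A hedged sketch of what such a localized weighted-median prediction could look like, assuming each base regressor h_t comes paired with a weight function w_t(x) (names hypothetical, not LOCMEDBOOST's exact form):

```python
import numpy as np

def localized_median_predict(base_regressors, weight_fns, x):
    """Sketch: weighted median of the base predictions h_t(x),
    where each weight w_t(x) depends on the query point x."""
    preds = np.array([h(x) for h in base_regressors])
    weights = np.array([w(x) for w in weight_fns])  # input-dependent weights
    order = np.argsort(preds)
    cum = np.cumsum(weights[order])
    return preds[order][np.searchsorted(cum, 0.5 * cum[-1])]
```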


Algorithmic Learning Theory, 1999. Theoretical Views of Boosting

Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, we briefly survey theoretical work on boosting, including analyses of AdaBoost's training error and generalization error, connections between boosting and game theory, methods of estimating probabilities using boosting, and extensions of AdaBoost for multiclass c...
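
For readers unfamiliar with the algorithm this survey centers on, here is a compact sketch of classic binary AdaBoost (Freund and Schapire); `fit_weak` is a stand-in for any weak-learner routine and is an assumption of this sketch.

```python
import numpy as np

def adaboost(X, y, fit_weak, T):
    """Classic binary AdaBoost: y takes values in {-1, +1}, and
    fit_weak(X, y, w) returns a weak classifier h with h(X) in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # uniform initial weights
    hs, alphas = [], []
    for _ in range(T):
        h = fit_weak(X, y, w)
        miss = h(X) != y
        eps = max(w[miss].sum(), 1e-12)        # weighted error, clipped away from 0
        if eps >= 0.5:                         # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - eps) / eps)
        w *= np.exp(np.where(miss, alpha, -alpha))
        w /= w.sum()                           # keep w a distribution
        hs.append(h)
        alphas.append(alpha)
    return lambda X_: np.sign(sum(a * h(X_) for a, h in zip(alphas, hs)))
```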


The University of Chicago. Algorithmic Stability and Ensemble-Based Learning. A dissertation submitted to the Faculty of the Division of the Physical Sciences in candidacy for the degree of Doctor of Philosophy, Department of Computer Science, by Samuel Kutin

We explore two themes in formal learning theory. We begin with a detailed, general study of the relationship between the generalization error and stability of learning algorithms. We then examine ensemble-based learning from the points of view of stability, decorrelation, and threshold complexity. A central problem of learning theory is bounding generalization error. Most such bounds have been ...


Lecture 9: Boosting

Last week we discussed some algorithmic aspects of machine learning. We saw one very powerful family of learning algorithms, namely nonparametric methods that make very weak assumptions on the data-generating distribution, but consequently have poor generalization error/convergence rates. These methods tend to have low approximation errors, but extremely high estimation errors. Then we saw som...
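
The tradeoff mentioned here is the standard excess-risk decomposition: for an estimator f̂ chosen from a model class F, with R the risk and R* the Bayes risk,

```latex
R(\hat f) - R^{*}
  = \underbrace{R(\hat f) - \inf_{f \in \mathcal{F}} R(f)}_{\text{estimation error}}
  + \underbrace{\inf_{f \in \mathcal{F}} R(f) - R^{*}}_{\text{approximation error}}
```

Rich nonparametric classes shrink the second term but inflate the first.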


Re-scale boosting for regression and classification

Boosting is a learning scheme that combines weak prediction rules to produce a strong composite estimator, with the underlying intuition that one can obtain accurate prediction rules by combining “rough” ones. Although boosting is proved to be consistent and overfitting-resistant, its numerical convergence rate is relatively slow. The aim of this paper is to develop a new boosting strategy, call...
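
The truncated abstract does not spell out the re-scale step itself, so the following is only a generic L2 gradient-boosting sketch with a step-size (shrinkage) parameter `nu`, shown to fix ideas about stage-wise composite estimators; it is not the paper's re-scale boosting.

```python
import numpy as np

def l2_boost(X, y, fit_weak, T, nu=0.1):
    """Generic L2 boosting: repeatedly fit a weak regressor to the
    current residuals and add it with a small step size nu."""
    F = np.zeros(len(y))           # composite estimator on the sample
    hs = []
    for _ in range(T):
        h = fit_weak(X, y - F)     # weak fit to the residuals
        F = F + nu * h(X)
        hs.append(h)
    return lambda X_: nu * sum(h(X_) for h in hs)
```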


Publication date: 2004